Multi-Class Prediction of Obesity Risk
The goal of this project is to predict obesity risk, a factor related to cardiovascular disease, from a set of lifestyle and physical attributes. This is a multi-class classification problem, and there are several methods for tackling it, such as Support Vector Machines (SVM), the XGBoost classifier, the LightGBM classifier, and neural networks in TensorFlow. In this project we build a model with each of these approaches and try to improve them toward an accurate prediction. Let's see ...
First of all, we import the libraries we need for this project.
import numpy as np
import pandas as pd
import os
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import *
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow import math
Now we have to import the dataset. Note that the dataset for this project is available on Kaggle.
data_dir = os.path.expanduser('~/Documents/2024_projects/project 2: Multi-Class Prediction of Obesity Risk')
for dirname, _, filenames in os.walk(data_dir):  # os.walk does not expand '~' on its own
    for filename in filenames:
        print(os.path.join(dirname, filename))
train_set = pd.read_csv('~/Documents/2024_projects/project 2: Multi-Class Prediction of Obesity Risk/train.csv')
test_set = pd.read_csv('~/Documents/2024_projects/project 2: Multi-Class Prediction of Obesity Risk/test.csv')
original_data = pd.read_csv('~/Documents/2024_projects/project 2: Multi-Class Prediction of Obesity Risk/ObesityDataSet.csv')
sample = pd.read_csv('~/Documents/2024_projects/project 2: Multi-Class Prediction of Obesity Risk/sample_submission.csv')
Let's explore the dataset and see what we can find. Digging into the dataset in a way that covers all aspects of the problem is worthwhile.
Delete Rows with unknowns
train_set.isna().sum()
id                                0
Gender                            0
Age                               0
Height                            0
Weight                            0
family_history_with_overweight    0
FAVC                              0
FCVC                              0
NCP                               0
CAEC                              0
SMOKE                             0
CH2O                              0
SCC                               0
FAF                               0
TUE                               0
CALC                              0
MTRANS                            0
NObeyesdad                        0
dtype: int64
original_data.isna().sum()
Gender                            0
Age                               0
Height                            0
Weight                            0
family_history_with_overweight    0
FAVC                              0
FCVC                              0
NCP                               0
CAEC                              0
SMOKE                             0
CH2O                              0
SCC                               0
FAF                               0
TUE                               0
CALC                              0
MTRANS                            0
NObeyesdad                        0
dtype: int64
That's great! We don't need to delete any rows.
train_set.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20758 entries, 0 to 20757
Data columns (total 18 columns):
 #   Column                          Non-Null Count  Dtype
---  ------                          --------------  -----
 0   id                              20758 non-null  int64
 1   Gender                          20758 non-null  object
 2   Age                             20758 non-null  float64
 3   Height                          20758 non-null  float64
 4   Weight                          20758 non-null  float64
 5   family_history_with_overweight  20758 non-null  object
 6   FAVC                            20758 non-null  object
 7   FCVC                            20758 non-null  float64
 8   NCP                             20758 non-null  float64
 9   CAEC                            20758 non-null  object
 10  SMOKE                           20758 non-null  object
 11  CH2O                            20758 non-null  float64
 12  SCC                             20758 non-null  object
 13  FAF                             20758 non-null  float64
 14  TUE                             20758 non-null  float64
 15  CALC                            20758 non-null  object
 16  MTRANS                          20758 non-null  object
 17  NObeyesdad                      20758 non-null  object
dtypes: float64(8), int64(1), object(9)
memory usage: 2.9+ MB
Let's have a look at the class distribution of the target. Obesity_Type_III is more common than the other classes, as this pie plot shows.
fig, ax = plt.subplots(1, 1, figsize = (8,5));
x = train_set["NObeyesdad"].value_counts()
labels = x.index  # take the label order from value_counts so labels stay aligned with counts
ax.pie(x = x,
labels = labels,
shadow = True,
explode = [.1 for i in range(train_set['NObeyesdad'].nunique())],
autopct = "%1.f%%",
textprops = {"size":10, "color":"black"});
ax.set_title("Obesity status in train set ", fontweight = "bold");
plt.tight_layout()
plt.show()
Now let's have a look at the numeric features and compare them against each other using a pairplot of KDE and scatter plots.
target = 'NObeyesdad'
num_col = []
cat_col = []
for i in train_set.columns.drop(['id', target]):
    if train_set[i].dtype == 'object':
        cat_col.append(i)
    else:
        num_col.append(i)
temp = num_col.copy()
temp.extend([target])
sort_label = ['Obesity_Type_III', 'Obesity_Type_II', 'Obesity_Type_I', 'Overweight_Level_II', 'Overweight_Level_I', 'Normal_Weight', 'Insufficient_Weight']
# pairplot creates its own figure, so a separate plt.figure() call only leaves an empty figure behind
sns.pairplot(train_set[sorted(temp)], hue=target, hue_order=sort_label)
plt.show()
Let's interpret this seaborn pairplot. The function creates a matrix of scatterplots for a dataset, where each feature is plotted against every other feature. It is particularly useful for exploring relationships and patterns between multiple numerical features.
Diagonal plots:
The diagonal plots display the univariate distribution of each feature; by default these are histograms or kernel density estimates (KDE). KDE plots show the estimated probability density function of a feature, giving additional insight into its distribution on its own.
Off-diagonal plots:
The off-diagonal plots are scatterplots of pairs of features. Each point represents one data point, positioned by the values of the two features being compared. The color of the points (the hue) can encode an additional dimension, such as a categorical target.
Correlation:
Scatterplots can reveal the relationship between two features: a positive slope indicates a positive correlation, a negative slope a negative one. They also help identify trends, clusters, outliers, and non-linear relationships between variables.
Pairwise relationships:
The pairplot provides a quick and comprehensive view of all pairwise relationships in the dataset. It is especially useful for identifying patterns or trends that involve several variables at once.
Axes and grid:
The matrix is symmetric: the lower and upper triangles contain the same information, so you can focus on one half.
In summary, seaborn.pairplot is a powerful tool for gaining a visual understanding of the relationships between multiple variables, and for identifying patterns, trends, and potential areas for further investigation.
“Here weight is a significant factor, as it shows good separation from the other features.”
Moreover, we can evaluate the features from another viewpoint, using violin plots.
sort_label = ['Obesity_Type_III', 'Obesity_Type_II', 'Obesity_Type_I', 'Overweight_Level_II', 'Overweight_Level_I', 'Normal_Weight', 'Insufficient_Weight']
for i in num_col:
    plt.figure(figsize=(10, 4))
    sns.violinplot(data=train_set, x=target, y=i, order=sort_label)
    plt.xticks(rotation=30)
    plt.show()
Checking the correlation between features could be useful.
# Create a subset correlation matrix for the selected features
subset_correlation_matrix = train_set[num_col].corr()
# plot the heatmap
plt.figure(figsize=(10,7))
sns.heatmap(
data = subset_correlation_matrix,
annot = True,
fmt=".2f",
mask=(np.triu(np.ones_like(subset_correlation_matrix)))
)
plt.show()
Happily, there is no noticeably high correlation in this map. Since no pair of features is strongly correlated, we hopefully avoid extra complexity such as multicollinearity in the model.
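Besides eyeballing the heatmap, we can list any feature pairs whose absolute correlation exceeds a threshold programmatically. A small sketch (the helper name and the 0.5 threshold are arbitrary choices, not from the notebook):

```python
import pandas as pd

def high_corr_pairs(df: pd.DataFrame, threshold: float = 0.5):
    """Return (col_a, col_b, corr) for every pair with |corr| above the threshold."""
    corr = df.corr()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):  # upper triangle only, no duplicates
            r = corr.iloc[i, j]
            if abs(r) > threshold:
                pairs.append((cols[i], cols[j], round(r, 2)))
    return pairs

# Tiny demo: 'a' and 'b' are perfectly correlated by construction, 'c' is noise
demo = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [2.0, 4.0, 6.0, 8.0],
    "c": [1.0, -1.0, 1.0, -1.0],
})
print(high_corr_pairs(demo))  # -> [('a', 'b', 1.0)]
```

Running `high_corr_pairs(train_set[num_col])` would confirm the visual impression from the heatmap.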
Now we can build the dataset. First of all, we concatenate train_set and original_data.
train = pd.concat([train_set, original_data]).drop(['id'], axis = 1).drop_duplicates()
test = test_set.drop(['id'], axis = 1)
Here we have different types of data: strings, floats, and integers. Since the models need numeric inputs, we have to convert the string columns to numbers using a consistent encoding. One way is to map the strings to floats manually:
train.loc[train['Gender'] == 'Female', 'Gender'] = 0.0
train.loc[train['Gender'] == 'Male', 'Gender'] = 1.0
train.loc[train['family_history_with_overweight'] == 'yes', 'family_history_with_overweight'] = 0.0
train.loc[train['family_history_with_overweight'] == 'no', 'family_history_with_overweight'] = 1.0
train.loc[train['FAVC'] == 'yes', 'FAVC'] = 0.0
train.loc[train['FAVC'] == 'no', 'FAVC'] = 1.0
train.loc[train['CAEC'] == 'Sometimes', 'CAEC'] = 0.25
train.loc[train['CAEC'] == 'Frequently', 'CAEC'] = 0.50
train.loc[train['CAEC'] == 'Always', 'CAEC'] = 0.75
train.loc[train['CAEC'] == 'no', 'CAEC'] = 1.0
train.loc[train['SMOKE'] == 'yes', 'SMOKE'] = 0.0
train.loc[train['SMOKE'] == 'no', 'SMOKE'] = 1.0
train.loc[train['SCC'] == 'yes', 'SCC'] = 0.0
train.loc[train['SCC'] == 'no', 'SCC'] = 1.0
train.loc[train['CALC'] == 'Sometimes', 'CALC'] = 0.25
train.loc[train['CALC'] == 'Frequently', 'CALC'] = 0.50
train.loc[train['CALC'] == 'Always', 'CALC'] = 0.75
train.loc[train['CALC'] == 'no', 'CALC'] = 1.0
train.loc[train['MTRANS'] == 'Public_Transportation', 'MTRANS'] = 0.0
train.loc[train['MTRANS'] == 'Automobile', 'MTRANS'] = 0.25
train.loc[train['MTRANS'] == 'Walking', 'MTRANS'] = 0.50
train.loc[train['MTRANS'] == 'Motorbike', 'MTRANS'] = 0.75
train.loc[train['MTRANS'] == 'Bike', 'MTRANS'] = 1.0
train.loc[train['NObeyesdad'] == 'Obesity_Type_III', 'NObeyesdad'] = 0.0
train.loc[train['NObeyesdad'] == 'Obesity_Type_II', 'NObeyesdad'] = 0.16
train.loc[train['NObeyesdad'] == 'Obesity_Type_I', 'NObeyesdad'] = 0.33
train.loc[train['NObeyesdad'] == 'Overweight_Level_II', 'NObeyesdad'] = 0.5
train.loc[train['NObeyesdad'] == 'Overweight_Level_I', 'NObeyesdad'] = 0.66
train.loc[train['NObeyesdad'] == 'Normal_Weight', 'NObeyesdad'] = 0.83
train.loc[train['NObeyesdad'] == 'Insufficient_Weight', 'NObeyesdad'] = 1.0
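The same manual conversion can be written more compactly with pandas `map`, keeping each column's mapping in a single dictionary. A sketch covering a few of the columns (the numeric codes mirror the ones used above; the `encode` helper is ours, not a library function):

```python
import pandas as pd

# One dictionary per column keeps the whole mapping in one place
mappings = {
    'Gender': {'Female': 0.0, 'Male': 1.0},
    'SMOKE': {'yes': 0.0, 'no': 1.0},
    'CAEC': {'Sometimes': 0.25, 'Frequently': 0.50, 'Always': 0.75, 'no': 1.0},
}

def encode(df: pd.DataFrame, mappings: dict) -> pd.DataFrame:
    out = df.copy()
    for col, mapping in mappings.items():
        out[col] = out[col].map(mapping)  # unmapped values become NaN, flagging typos
    return out

demo = pd.DataFrame({'Gender': ['Female', 'Male'],
                     'SMOKE': ['no', 'yes'],
                     'CAEC': ['Sometimes', 'Always']})
print(encode(demo, mappings))
```

Unlike the chained `loc` assignments, a single `map` call per column cannot accidentally re-match values already converted earlier in the chain.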
Another way is feature encoding with scikit-learn:
label_encoder = LabelEncoder()
encoders = {
    'Gender': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'family_history_with_overweight': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'FAVC': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'CAEC': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'SMOKE': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'SCC': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'CALC': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1),
    'MTRANS': OrdinalEncoder(categories='auto', handle_unknown='use_encoded_value', unknown_value=-1)
}
train["NObeyesdad"] = label_encoder.fit_transform(train["NObeyesdad"])
for feat, enc in encoders.items():
    train[feat] = enc.fit_transform(train[[feat]]).astype('int32')
    test[feat] = enc.transform(test[[feat]]).astype('int32')
Now it's time to normalize the data. Scaling the features to a common range helps models that are sensitive to feature magnitude and avoids numerical issues.
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler
ct = make_column_transformer(
    (MinMaxScaler(), ['Age', 'Height', 'Weight', 'family_history_with_overweight', 'FAVC', 'FCVC', 'NCP',
                      'CAEC', 'SMOKE', 'CH2O', 'SCC', 'FAF', 'TUE', 'CALC', 'MTRANS']),
    remainder='passthrough'  # keep the remaining columns (e.g. Gender) instead of dropping them
)
X = train.drop('NObeyesdad', axis = 1) # Features
y = train['NObeyesdad'] # Labels
X_train , X_test , y_train, y_test = train_test_split(X, y, train_size = 0.7, random_state = 42)
ct.fit(X_train)
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)
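MinMaxScaler rescales each column to [0, 1] via x' = (x - min) / (max - min), with min and max computed on the training split only; that is why we fit on X_train and merely transform X_test. A tiny sketch:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_demo = np.array([[1.0], [2.0], [3.0]])
print(scaler.fit_transform(X_demo))  # -> [[0.], [0.5], [1.]]

# New data is scaled with the *training* min/max, so values outside
# the training range also fall outside [0, 1]
print(scaler.transform([[4.0]]))     # -> [[1.5]]
```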
Let's make a model with a support vector machine.
from sklearn.svm import SVC
clf = SVC(kernel='rbf')
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test);
from sklearn.metrics import accuracy_score
accuracy_svm = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy_svm * 100:.2f}%")
Accuracy: 71.05%
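An RBF-kernel SVM is sensitive to feature scale, so before abandoning it, it is worth trying the normalized inputs, e.g. via a scaler inside a pipeline. A hedged sketch on synthetic data (the accuracy on the real dataset may well differ from both this toy number and the 71.05% above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic multi-class data standing in for the obesity features
X_demo, y_demo = make_classification(n_samples=500, n_features=16, n_informative=8,
                                     n_classes=4, n_clusters_per_class=1, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, train_size=0.7, random_state=42)

# The scaler is fit inside the pipeline, so the test fold is transformed
# with statistics learned only on the training fold
scaled_svm = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
scaled_svm.fit(X_tr, y_tr)
print(f"Accuracy: {accuracy_score(y_te, scaled_svm.predict(X_te)) * 100:.2f}%")
```

The pipeline form also prevents the common leak of fitting the scaler on the full dataset before splitting.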
The SVM accuracy is modest, so it is better to try another method such as XGBoost.
from xgboost import XGBClassifier
xgb_classifier = XGBClassifier(learning_rate=0.1, n_estimators= 200, max_depth = 3, min_child_weight = 10)
xgb_classifier.fit(X_train_normal, y_train)
y_pred = xgb_classifier.predict(X_test_normal)
accuracy_xgboost = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy_xgboost * 100:.2f}%")
Accuracy: 90.62%
Let's try the LGBMClassifier.
from lightgbm import LGBMClassifier
params = {
"objective": "multiclass", # Objective function for the model
"metric": "multi_logloss", # Evaluation metric
"verbosity": -1, # Verbosity level (-1 for silent)
"boosting_type": "gbdt", # Gradient boosting type
"random_state": 42, # Random state for reproducibility
"num_class": 7, # Number of classes in the dataset
'learning_rate': 0.01197852738297134, # Learning rate for gradient boosting
'n_estimators': 1000, # Number of boosting iterations
'lambda_l1': 0.009715116714365275, # L1 regularization term
'lambda_l2': 0.03853395161282091, # L2 regularization term
'max_depth': 11, # Maximum depth of the trees
'colsample_bytree': 0.7364306508830604, # Fraction of features to consider for each tree
'subsample': 0.9529973839959326, # Fraction of samples to consider for each boosting iteration
'min_child_samples': 17 # Minimum number of data needed in a leaf
}
lgbm_classifier = LGBMClassifier(**params)
lgbm_classifier.fit(X_train, y_train)
y_pred = lgbm_classifier.predict(X_test)
accuracy_score(y_test, y_pred)
0.9162532827545958
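Accuracy alone hides per-class behavior; scikit-learn's `classification_report` and `confusion_matrix` give a per-class breakdown, which matters here because the classes are imbalanced. A minimal sketch with toy labels (standing in for `y_test` and `y_pred`):

```python
from sklearn.metrics import classification_report, confusion_matrix

# Toy integer labels: three classes, two samples each
y_true = [0, 0, 1, 1, 2, 2]
y_hat  = [0, 0, 1, 2, 2, 2]

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_true, y_hat))
print(classification_report(y_true, y_hat, zero_division=0))
```

On the real predictions, the off-diagonal entries show which adjacent obesity levels the model confuses most.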
test_pred = lgbm_classifier.predict(test)  # predict on the Kaggle test set, not the validation split
sample['NObeyesdad'] = label_encoder.inverse_transform(test_pred)
sample.to_csv("submission.csv",index=False)
sample.head(10)
|   | id | NObeyesdad |
|---|---|---|
| 0 | 20758 | Obesity_Type_II |
| 1 | 20759 | Overweight_Level_I |
| 2 | 20760 | Obesity_Type_III |
| 3 | 20761 | Obesity_Type_I |
| 4 | 20762 | Obesity_Type_III |
| 5 | 20763 | Insufficient_Weight |
| 6 | 20764 | Insufficient_Weight |
| 7 | 20765 | Normal_Weight |
| 8 | 20766 | Overweight_Level_II |
| 9 | 20767 | Normal_Weight |
Let's have a look at feature importance.
# feature importance
feature_importance = lgbm_classifier.feature_importances_
feature_importance_df = pd.DataFrame({'Feature': X.columns, 'Importance':feature_importance})
feature_importance_df = feature_importance_df.sort_values(by='Importance', ascending =False)
plt.figure(figsize = (12,10));
sns.barplot(x = 'Importance', y = 'Feature', data=feature_importance_df, color = 'green');
plt.title('Feature importance');
plt.xlabel('Importance');
sns.despine(left = True, bottom = True)
plt.show()
Let's approach this problem from the TensorFlow viewpoint.
How do we improve the prediction/model?
- Fit on more data
- Try different activation functions: they determine what each layer can represent
- Try different optimizers: they control how the weights are updated toward the expected results
- Increase hidden units: more units give the model more capacity to fit the desired output
- Adjust the learning rate: how much to change the model based on the estimated error
- Fit longer: increase the number of epochs the model trains for
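"Fit longer" is safest when combined with early stopping, which halts training once the validation loss stops improving. A sketch of the standard Keras callback (the patience value is an arbitrary choice):

```python
import tensorflow as tf

# Stop once val_loss has not improved for 10 consecutive epochs,
# and roll the weights back to the best epoch seen
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=10,
    restore_best_weights=True,
)

# Passed to training via, e.g.:
# model.fit(X_train, y_train, epochs=500,
#           validation_data=(X_test, y_test),
#           callbacks=[early_stop])
```

This lets us set a generous epoch budget without overfitting to the training split.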
Build the TensorFlow model.
tf.random.set_seed(66)
# Create the model
# The input is the vector of 16 features describing each sample.
# A stack of small Dense layers feeds a 7-way softmax output,
# one unit per obesity class.
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(16,)),  # 16 input features per sample
tf.keras.layers.Dense(4, activation="relu"),
tf.keras.layers.Dense(4, activation="relu"),
tf.keras.layers.Dense(4, activation="tanh"),
tf.keras.layers.Dense(4, activation="tanh"),
tf.keras.layers.Dense(7, activation="relu"),
tf.keras.layers.Dense(7, activation="softmax")
])
# Use SparseCategoricalCrossentropy because the labels are
# integer class indices rather than one-hot vectors
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(),
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit the model
history = model.fit(X_train,
y_train,
epochs=100,
validation_data=(X_test, y_test))
Epoch 1/100
500/500 [==============================] - 1s 2ms/step - loss: 1.8051 - accuracy: 0.2460 - val_loss: 1.5663 - val_accuracy: 0.3939
Epoch 2/100
500/500 [==============================] - 1s 1ms/step - loss: 1.3879 - accuracy: 0.4781 - val_loss: 1.2669 - val_accuracy: 0.5646
...
Epoch 99/100
500/500 [==============================] - 1s 3ms/step - loss: 0.4622 - accuracy: 0.8237 - val_loss: 0.4685 - val_accuracy: 0.8248
Epoch 100/100
500/500 [==============================] - 1s 3ms/step - loss: 0.4620 - accuracy: 0.8211 - val_loss: 0.4871 - val_accuracy: 0.8172
(epochs 3–98 omitted; validation accuracy climbs steadily from about 0.39 to about 0.82)
pd.DataFrame(history.history).plot()